Add unit test coverage for utils/job module#759

Open
ethany-nv wants to merge 1 commit into main from ethany/add-utils-job-coverage

Conversation

@ethany-nv
Collaborator

@ethany-nv ethany-nv commented Mar 31, 2026

Add 6 new test files covering previously untested code paths across common.py, backend_job_defs.py, task.py, workflow.py, kb_objects.py, kb_methods.py, jobs_base.py, app.py, and task_io.py. Tests cover model validation, enum methods, utility functions, and K8s object factory logic.

Description

Issue #<issue number | "None">

Checklist

  • I am familiar with the Contributing Guidelines.
  • New or existing tests cover these changes.
  • The documentation is up to date with these changes.

Summary by CodeRabbit

Release Notes

  • Tests

    • Added comprehensive unit test coverage for job utilities, task and workflow models, and Kubernetes cluster integrations.
    • Extended test suite validates configuration defaults, status classifications, and object construction behavior.
  • Chores

    • Updated build configuration to support expanded test infrastructure.


Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
@ethany-nv ethany-nv requested a review from a team as a code owner March 31, 2026 00:33
@ethany-nv ethany-nv self-assigned this Mar 31, 2026
@coderabbitai

coderabbitai bot commented Mar 31, 2026

📝 Walkthrough

Adds six new unit test modules to src/utils/job/tests/ (1,195 lines of tests) that validate existing job utilities, including backend definitions, common helpers, jobs base functionality, Kubernetes objects, task models, and workflow models. The build configuration is updated with corresponding test targets.

Changes

Cohort / File(s) Summary
Build Configuration
src/utils/job/tests/BUILD
Added six py_test targets for new test modules with dependencies on storage, osmo_errors, backend_job, connectors, priority, and jobs_base libraries.
Backend Job Utilities Tests
src/utils/job/tests/test_backend_job_defs.py, test_common_utils.py
Tests for backend_job_defs covering BackendCleanupSpec effective properties, BackendGenericApi/BackendCustomApi construction, and mixin defaults; tests for common module validating path builders, timeout calculation, barrier key formatting, and WorkflowPlugins configuration.
Core Job Functionality Tests
src/utils/job/tests/test_jobs_base_unit.py, test_kb_objects_unit.py
Tests for jobs_base enum values, JobResult retry logic, progress writer frequency checks, and job TTL constant; comprehensive tests for kb_objects covering Kubernetes naming, object factory manifest generation, feature flags, FileMount digest handling, and factory error conditions.
Model Validation Tests
src/utils/job/tests/test_task_models.py, test_workflow_models.py
Extensive tests for task models covering AppStatus, ExitAction/ExitCode enums, TaskGroupStatus lifecycle methods, login dict construction, name truncation, hstore encoding, input/output validation, and checkpoint/KPI specs; tests for workflow models validating status classification, action queue naming, timeout normalization, assertion rule partitioning, version constraints, and resource validation results.

Estimated code review effort

🎯 3 (Moderate) | ⏱️ ~25 minutes

Poem

🐰 Hop, hop! Tests now bloom,
Six modules chase each room,
From jobs to workflow's height,
Coverage shines so bright!
Our rabbit warren's code takes flight!

🚥 Pre-merge checks | ✅ 2 | ❌ 1

❌ Failed checks (1 warning)

Docstring Coverage — ⚠️ Warning: Docstring coverage is 0.00%, which is insufficient (required threshold: 80.00%). Resolution: write docstrings for the functions missing them to satisfy the coverage threshold.
✅ Passed checks (2 passed)
Title Check — ✅ Passed: The title 'Add unit test coverage for utils/job module' directly and accurately summarizes the main change: six new test files adding unit test coverage to the utils/job module.
Description Check — ✅ Passed: Check skipped; CodeRabbit's high-level summary is enabled.


@codecov

codecov bot commented Mar 31, 2026

Codecov Report

✅ All modified and coverable lines are covered by tests.
✅ Project coverage is 43.04%. Comparing base (9af7c66) to head (d93d9d3).

Additional details and impacted files
@@            Coverage Diff             @@
##             main     #759      +/-   ##
==========================================
+ Coverage   42.84%   43.04%   +0.20%     
==========================================
  Files         203      203              
  Lines       26844    26844              
  Branches     7603     7606       +3     
==========================================
+ Hits        11500    11554      +54     
+ Misses      15233    15181      -52     
+ Partials      111      109       -2     
Flag Coverage Δ
backend 45.40% <ø> (+0.22%) ⬆️

Flags with carried forward coverage won't be shown.
See 22 files with indirect coverage changes.



@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 2

🧹 Nitpick comments (3)
src/utils/job/tests/test_task_models.py (1)

345-345: Relocate method-local imports to top-level.

Imports at Line 345, Line 350, and Line 460 should be moved to module scope to comply with repository import rules.

As per coding guidelines, "All imports must be at the top level of the module after the module docstring. No exceptions: imports inside functions are not allowed."

Also applies to: 350-350, 460-460
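A minimal sketch of the relocation the guideline asks for. The test name and body here are hypothetical (not the actual flagged tests); only the placement of the imports is the point:

```python
"""Tests for task models (sketch)."""
# After the fix: every import sits at module scope, immediately after
# the module docstring -- no imports inside test functions.
import base64
import unittest


class TestLoginEncoding(unittest.TestCase):
    def test_round_trip(self):
        # Previously `import base64` lived inside this function; it now
        # resolves against the module-level import above.
        encoded = base64.b64encode(b'user:secret').decode('ascii')
        self.assertEqual(base64.b64decode(encoded), b'user:secret')


if __name__ == '__main__':
    unittest.main()
```

Besides satisfying the repository rule, module-level imports surface missing dependencies at collection time rather than mid-test.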

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/utils/job/tests/test_task_models.py` at line 345, Several tests in
test_task_models.py contain method-local imports (e.g., import base64 plus the
other imports flagged) that must be moved to module scope; relocate those
imports out of the test functions to the top of the module (immediately after
the module docstring) and remove the in-function import statements so functions
like the tests referencing base64 use the top-level imports instead.
src/utils/job/tests/BUILD (1)

137-144: Add explicit pyyaml dependency to test_workflow_models.

test_workflow_models.py imports yaml directly, so this target should declare requirement("pyyaml") explicitly to avoid strict-deps fragility.

Suggested fix
 py_test(
     name = "test_workflow_models",
     srcs = ["test_workflow_models.py"],
     deps = [
+        requirement("pyyaml"),
         "//src/utils/connectors",
         "//src/utils/job",
     ],
 )
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/utils/job/tests/BUILD` around lines 137 - 144, The py_test target named
"test_workflow_models" is missing an explicit runtime dependency on pyyaml even
though test_workflow_models.py imports yaml; update the py_test target (the name
"test_workflow_models") to include requirement("pyyaml") in its deps list so the
test declares the YAML requirement explicitly.
src/utils/job/tests/test_kb_objects_unit.py (1)

259-259: Move in-function imports to module scope.

Imports at Line 259, Line 264, Line 270-271, and Line 281-282 violate the module import rule and should be relocated to top-level imports.

As per coding guidelines, "All imports must be at the top level of the module after the module docstring. No exceptions: imports inside functions are not allowed."

Also applies to: 264-264, 270-271, 281-282

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/utils/job/tests/test_kb_objects_unit.py` at line 259, Move the
in-function imports to module scope: take the imports currently performed inside
test_kb_objects_unit.py (for example CustomObjectMetadataStub and the other
imports located at the spots you flagged) and place them at the top of the
module immediately after the module docstring so they become top-level imports;
update the test functions to use those names directly (no local imports) and run
tests to ensure nothing else relies on late import side-effects.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@src/utils/job/tests/test_kb_objects_unit.py`:
- Line 179: Replace broad Exception assertions with the concrete exception types
thrown by the code: change the test that expects FileMount.digest_validator to
raise Exception to assertRaises(ValueError) (reference
FileMount.digest_validator) and change the test that expects
get_k8s_object_factory to raise Exception for unsupported scheduler types to
assertRaises(osmo_errors.OSMOServerError) (reference get_k8s_object_factory and
osmo_errors.OSMOServerError) so the tests assert the specific errors actually
raised.

In `@src/utils/job/tests/test_task_models.py`:
- Line 279: Replace the broad with self.assertRaises(Exception): usages in these
tests with the specific pydantic.ValidationError to ensure only model validation
failures are caught; update each occurrence of the exact snippet "with
self.assertRaises(Exception):" (including the other occurrences noted) to "with
self.assertRaises(pydantic.ValidationError):" and add/import
pydantic.ValidationError at the top of the test module if it's not already
imported. Ensure you change all listed instances so the tests fail on unexpected
exception types.


ℹ️ Review info
⚙️ Run configuration

Configuration used: Path: .coderabbit.yaml

Review profile: CHILL

Plan: Pro

Run ID: 2c296b86-347f-428c-b684-4005700d802d

📥 Commits

Reviewing files that changed from the base of the PR and between 9af7c66 and d93d9d3.

📒 Files selected for processing (7)
  • src/utils/job/tests/BUILD
  • src/utils/job/tests/test_backend_job_defs.py
  • src/utils/job/tests/test_common_utils.py
  • src/utils/job/tests/test_jobs_base_unit.py
  • src/utils/job/tests/test_kb_objects_unit.py
  • src/utils/job/tests/test_task_models.py
  • src/utils/job/tests/test_workflow_models.py

self.assertTrue(fm.name.startswith('osmo-'))

def test_digest_cannot_be_set(self):
with self.assertRaises(Exception):

⚠️ Potential issue | 🟡 Minor


Use concrete exception types in assertions instead of broad Exception.

At Line 179, assertRaises(Exception) should be assertRaises(ValueError) since the FileMount.digest_validator raises ValueError when digest is set. At Line 250, assertRaises(Exception) should be assertRaises(osmo_errors.OSMOServerError) since get_k8s_object_factory raises that specific error for unsupported scheduler types. Using concrete exceptions keeps tests strict and improves diagnostic clarity.
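As a sketch of the narrowed assertions, with toy stand-ins rather than the real kb_objects code (the real factory raises osmo_errors.OSMOServerError, modeled here by a plain RuntimeError subclass):

```python
import typing
import unittest


class OSMOServerError(RuntimeError):
    """Stand-in for osmo_errors.OSMOServerError."""


class FileMount:
    """Toy FileMount: the digest is derived, so callers may not set it."""

    def __init__(self, name: str, digest: typing.Optional[str] = None):
        if digest is not None:
            # Mirrors the ValueError raised by the real digest_validator.
            raise ValueError('digest cannot be set explicitly')
        self.name = f'osmo-{name}'


def get_k8s_object_factory(scheduler: str) -> object:
    """Toy factory: rejects unsupported scheduler types."""
    if scheduler != 'kubernetes':
        raise OSMOServerError(f'unsupported scheduler: {scheduler}')
    return object()


class TestNarrowAssertions(unittest.TestCase):
    def test_digest_cannot_be_set(self):
        # assertRaises(ValueError), not the blind assertRaises(Exception).
        with self.assertRaises(ValueError):
            FileMount(name='data', digest='sha256:abc')

    def test_unsupported_scheduler(self):
        with self.assertRaises(OSMOServerError):
            get_k8s_object_factory('slurm')
```

With the blind `Exception` form, even a `TypeError` from a renamed keyword argument would pass the test; the narrowed form fails loudly on anything but the intended error.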

🧰 Tools
🪛 Ruff (0.15.7)

[warning] 179-179: Do not assert blind exception: Exception

(B017)


self.assertEqual(tio.regex, r'.*\.txt')

def test_invalid_regex_raises(self):
with self.assertRaises(Exception):

⚠️ Potential issue | 🟡 Minor


Narrow assertRaises(Exception) to pydantic.ValidationError.

These negative tests currently accept any exception, which can hide unrelated regressions. Since all these constructor calls trigger pydantic model validation, they raise pydantic.ValidationError when validators fail. Assert the concrete exception type for each validation path.

Also applies to: 310-310, 336-336, 340-340, 383-383, 404-404, 408-408, 412-412
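Assuming pydantic is the validation layer (as the comment states), the narrowing looks like this; `RegexSpec` is a hypothetical stand-in for the real model, and the invalid input is a type mismatch rather than the actual flagged field:

```python
import unittest

import pydantic


class RegexSpec(pydantic.BaseModel):
    """Hypothetical stand-in for the real task model under test."""
    regex: str
    max_files: int


class TestRegexSpec(unittest.TestCase):
    def test_valid_spec(self):
        spec = RegexSpec(regex=r'.*\.txt', max_files=3)
        self.assertEqual(spec.regex, r'.*\.txt')

    def test_invalid_spec_raises_validation_error(self):
        # Narrow assertion: a TypeError or AttributeError from an
        # unrelated regression now fails the test instead of passing.
        with self.assertRaises(pydantic.ValidationError):
            RegexSpec(regex=r'.*\.txt', max_files='not-an-int')


if __name__ == '__main__':
    unittest.main()
```

Pydantic wraps validator failures (including `ValueError` raised inside custom validators) in `pydantic.ValidationError`, so that is the single concrete type these constructor tests should expect.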

🧰 Tools
🪛 Ruff (0.15.7)

[warning] 279-279: Do not assert blind exception: Exception

(B017)

